DeepGarment: 3D Garment Shape Estimation from a Single Image

Authors

  • R. Danerek
  • Endri Dibra
  • A. Cengiz Öztireli
  • Remo Ziegler
  • Markus H. Gross
Abstract

3D garment capture is an important component for various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most existing works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image, under varying factors such as challenging human poses, self-occlusions, various camera poses and lighting conditions, at interactive rates. Accuracy improves when more than one view is integrated. Additionally, we show applications of our method to videos. This is the authors' preprint; the definitive version is available at http://diglib.eg.org/ and http://onlinelibrary.wiley.com/.
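To make the learning setup concrete, the following is a minimal sketch, not the authors' actual architecture or loss: an assumed small PyTorch CNN that regresses per-vertex 3D displacements from a template garment mesh given a rendered image, trained with a plain per-vertex L2 loss as a stand-in for the paper's specialized loss. The network layers, the vertex count, and all names here are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# a CNN maps a rendered garment image to per-vertex 3D displacements
# from a fixed template mesh; a simple per-vertex L2 loss stands in
# for the specialized loss used in the paper.
import torch
import torch.nn as nn

NUM_VERTICES = 5000  # hypothetical template-mesh resolution


class GarmentCNN(nn.Module):
    def __init__(self, num_vertices=NUM_VERTICES):
        super().__init__()
        # Convolutional feature extractor over the input rendering.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Fully connected regressor producing one 3D displacement per vertex.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 1024), nn.ReLU(),
            nn.Linear(1024, num_vertices * 3),
        )
        self.num_vertices = num_vertices

    def forward(self, image):
        # image: (B, 3, H, W) rendering; returns (B, V, 3) displacements.
        d = self.regressor(self.features(image))
        return d.view(-1, self.num_vertices, 3)


def vertex_loss(pred_disp, gt_disp):
    # Mean per-vertex squared distance between predicted and ground-truth
    # displacements (a plain stand-in for the paper's specialized loss).
    return ((pred_disp - gt_disp) ** 2).sum(dim=-1).mean()


# Illustrative training step on one synthetic (rendering, displacement) pair:
# model = GarmentCNN(); opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# disp = model(render); loss = vertex_loss(disp, gt_disp)
# opt.zero_grad(); loss.backward(); opt.step()
```

Deforming a shared template mesh rather than predicting free-form geometry keeps the output dimensionality fixed and preserves vertex correspondence across predictions, which is what makes a direct regression formulation like this practical.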

Similar articles

Detailed Garment Recovery from a Single-View Image

Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, we propose a method that is able to compute a 3D model of a human body and its outfit from a single photograph with little human interaction. Our algorithm is not only able to capture...


Garment Modeling from a Single Image

Modeling of realistic garments is essential for online shopping and many other applications including virtual characters. Most existing methods either require a multi-camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all-pose garment outline interpretation, and a shading-based detail modeling algorithm...


Hand Shape and 3D Pose Estimation Using Depth Data from a Single Cluttered Frame

This paper describes a method that, given an input image of a person signing a gesture in a cluttered scene, locates the gesturing arm, automatically detects and segments the hand and finally creates a ranked list of possible shape class, 3D pose orientation and full hand configuration parameters. The clutter-tolerant hand segmentation algorithm is based on depth data from a single image captur...


Automatic Class-Specific 3D Reconstruction from a Single Image

Our goal is to automatically reconstruct 3D objects from a single image, by using prior 3D shape models of classes. The shape models, defined as a collection of oriented primitive shapes centered at fixed 3D positions, can be learned from a few labeled images for each class. The 3D class model can then be used to estimate the 3D shape of an object instance, including occluded parts, from a sing...


Automatic 3D Face Modeling with Single Image and Perspective Projection Model

In this paper, a precise and robust algorithm is presented to automatically model the 3D face with only single frontal face image. For shape and projection parameters estimation, an energy function is built to represent the difference between the approximate observed face 3D shape and estimated face 3D shape. To achieve more robust results, shape constraint is added to the energy function. For ...



Journal:
  • Comput. Graph. Forum

Volume 36, Issue –

Pages –

Publication year 2017